Quickest Change Detection of a Markov Process Across a Sensor Array
Recent work on quickest change detection in the multi-sensor setting has
focused on the case where the densities of the observations change at the same
instant at all the sensors due to the disruption. In this work, a more general
scenario is considered where the change propagates across the sensors, and its
propagation can be modeled as a Markov process. A centralized, Bayesian version
of this problem, with a fusion center that has perfect information about the
observations and a priori knowledge of the statistics of the change process, is
considered. The problem of minimizing the average detection delay subject to
false alarm constraints is formulated as a partially observable Markov decision
process (POMDP). Insights into the structure of the optimal stopping rule are
presented. In the limiting case of rare disruptions, we show that the structure
of the optimal test reduces to thresholding the a posteriori probability of the
hypothesis that no change has happened. We establish the asymptotic optimality
(in the vanishing false alarm probability regime) of this threshold test under
a certain condition on the Kullback-Leibler (K-L) divergence between the post-
and the pre-change densities. In the special case of near-instantaneous change
propagation across the sensors, this condition reduces to the mild condition
that the K-L divergence be positive. Numerical studies show that this low
complexity threshold test results in a substantial improvement in performance
over naive tests such as a single-sensor test or a test that wrongly assumes
that the change propagates instantaneously.
Comment: 40 pages, 5 figures, Submitted to IEEE Trans. Inform. Theory
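The posterior-thresholding structure described above can be sketched, in the simplest single-sensor Shiryaev setting with a geometric change-point prior, as follows. The function names, the geometric parameter `rho`, and the Gaussian densities are illustrative assumptions, not the paper's multi-sensor test:

```python
import math

def shiryaev_stopping_time(xs, rho, f0, f1, threshold):
    """Stop at the first time the posterior probability that a change has
    already occurred reaches `threshold` (equivalently, when the posterior
    of the no-change hypothesis drops below 1 - threshold)."""
    p = 0.0  # P(change point <= k | observations up to time k)
    for k, x in enumerate(xs, start=1):
        p_pred = p + (1.0 - p) * rho              # geometric prior update
        num = p_pred * f1(x)
        p = num / (num + (1.0 - p_pred) * f0(x))  # Bayes update with sample x
        if p >= threshold:
            return k   # stopping time: declare the change
    return None        # no change declared over the horizon

def gauss_pdf(mu):
    """N(mu, 1) density, used here as an illustrative pre/post-change model."""
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
```

With a mean shift from N(0, 1) to N(1, 1), `rho = 0.01`, and `threshold = 0.99`, the rule typically declares the change within a few tens of post-change samples.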
Capacity Results for Block-Stationary Gaussian Fading Channels with a Peak Power Constraint
We consider a peak-power-limited single-antenna block-stationary Gaussian
fading channel where neither the transmitter nor the receiver knows the channel
state information, but both know the channel statistics. This model subsumes
most previously studied Gaussian fading models. We first compute the asymptotic
channel capacity in the high SNR regime and show that the behavior of channel
capacity depends critically on the channel model. For the special case where
the fading process is symbol-by-symbol stationary, we also reveal a fundamental
interplay between the codeword length, communication rate, and decoding error
probability. Specifically, we show that the codeword length must scale with SNR
in order to guarantee that the communication rate can grow logarithmically with
SNR with bounded decoding error probability, and we find a necessary condition
for the growth rate of the codeword length. We also derive an expression for
the capacity per unit energy. Furthermore, we show that the capacity per unit
energy is achievable using temporal ON-OFF signaling with optimally allocated
ON symbols, where the optimal ON-symbol allocation scheme may depend on the
peak power constraint.
Comment: Submitted to the IEEE Transactions on Information Theory
Data-Efficient Quickest Outlying Sequence Detection in Sensor Networks
A sensor network is considered where at each sensor a sequence of random
variables is observed. At each time step, a processed version of the
observations is transmitted from the sensors to a common node called the fusion
center. At some unknown point in time the distribution of observations at an
unknown subset of the sensor nodes changes. The objective is to detect the
outlying sequences as quickly as possible, subject to constraints on the false
alarm rate, the cost of observations taken at each sensor, and the cost of
communication between the sensors and the fusion center. Minimax formulations
of the above problem are proposed, along with algorithms that are shown to be
asymptotically optimal under these formulations as the false alarm
rate goes to zero. It is also shown, via numerical studies, that the proposed
algorithms perform significantly better than those based on fractional
sampling, in which the classical algorithms from the literature are used and
the constraint on the cost of observations is met by using the outcome of a
sequence of biased coin tosses, independent of the observation process.
Comment: Submitted to IEEE Transactions on Signal Processing, Nov 2014. arXiv admin note: text overlap with arXiv:1408.474
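The paper's minimax formulations and data-efficient algorithms are not reproduced here; as a simple baseline of the same flavor, outlying sequences at an unknown subset of sensors can be flagged by running one CUSUM statistic per sensor at the fusion center and stopping when the largest statistic crosses a threshold. The names and the Gaussian model below are illustrative assumptions:

```python
import math

def cusum_max_rule(streams, f0, f1, threshold):
    """Run a CUSUM statistic for each sensor stream and stop when the
    largest statistic crosses `threshold`; return the stopping time and
    the index of the sensor whose statistic crossed."""
    n_sensors = len(streams)
    W = [0.0] * n_sensors
    horizon = min(len(s) for s in streams)
    for t in range(horizon):
        for i in range(n_sensors):
            llr = math.log(f1(streams[i][t]) / f0(streams[i][t]))
            W[i] = max(0.0, W[i] + llr)   # CUSUM recursion per sensor
        if max(W) >= threshold:
            return t + 1, max(range(n_sensors), key=lambda i: W[i])
    return None, None
```

This baseline uses every observation at every sensor, which is exactly the observation cost the paper's data-efficient formulations are designed to reduce.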
Cooperative Relay Broadcast Channels
The capacity regions are investigated for two relay broadcast channels
(RBCs), where relay links are incorporated into standard two-user broadcast
channels to support user cooperation. In the first channel, the Partially
Cooperative Relay Broadcast Channel, only one user in the system can act as a
relay and transmit to the other user through a relay link. An achievable rate
region is derived based on the relay using the decode-and-forward scheme. An
outer bound on the capacity region is derived and is shown to be tighter than
the cut-set bound. For the special case where the Partially Cooperative RBC is
degraded, the achievable rate region is shown to be tight and provides the
capacity region. Gaussian Partially Cooperative RBCs and Partially Cooperative
RBCs with feedback are further studied. In the second channel model
studied in the paper, the Fully Cooperative Relay Broadcast Channel, both users
can act as relay nodes and transmit to each other through relay links. This is
a more general model than the Partially Cooperative RBC. All the results for
Partially Cooperative RBCs are correspondingly generalized to the Fully
Cooperative RBCs. It is further shown that the AWGN Fully Cooperative RBC has a
larger achievable rate region than the AWGN Partially Cooperative RBC. The
results illustrate that relaying and user cooperation are powerful techniques
for improving the capacity of broadcast channels.
Comment: Submitted to the IEEE Transactions on Information Theory, July 200
Data-Efficient Quickest Change Detection with On-Off Observation Control
In this paper we extend Shiryaev's quickest change detection formulation
by also accounting for the cost of observations used before the change point.
The observation cost is captured through the average number of observations
used in the detection process before the change occurs. The objective is to
select an on-off observation control policy that decides whether or not to
take a given observation, along with the stopping time at which the change is
declared, so as to minimize the average detection delay, subject to constraints
on both the probability of false alarm and the observation cost. By considering
a Lagrangian relaxation of the constrained problem, and using dynamic
programming arguments, we obtain an a posteriori probability based
two-threshold algorithm that is a generalized version of the classical Shiryaev
algorithm. We provide an asymptotic analysis of the two-threshold algorithm and
show that the algorithm is asymptotically optimal, i.e., the performance of the
two-threshold algorithm approaches that of the Shiryaev algorithm, for a fixed
observation cost, as the probability of false alarm goes to zero. We also show,
using simulations, that the two-threshold algorithm has good observation
cost-delay trade-off curves, and provides significant reduction in observation
cost as compared to the naive approach of fractional sampling, where samples
are skipped randomly. Our analysis reveals that, for practical choices of
constraints, the two thresholds can be set independent of each other: one based
on the constraint of false alarm and another based on the observation cost
constraint alone.
Comment: Preliminary version of this paper has been presented at ITA Workshop, UCSD 201
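A minimal sketch of a two-threshold rule of the kind described above: below the lower threshold the observation is skipped and only the prior update is applied; above it the sample is used in the Bayes update; the change is declared when the posterior reaches the upper threshold. The geometric prior and all parameter names are assumptions for illustration, not the paper's exact algorithm:

```python
def two_threshold_rule(xs, rho, f0, f1, lower, upper):
    """Data-efficient Shiryaev-type rule: skip the observation whenever the
    posterior is below `lower` (prior update only), take it otherwise, and
    declare a change when the posterior reaches `upper`.  Returns the
    stopping time and the number of observations actually used."""
    p = 0.0   # posterior probability that the change has occurred
    used = 0
    for k, x in enumerate(xs, start=1):
        p = p + (1.0 - p) * rho         # geometric prior (transition) update
        if p >= lower:                  # observation control: take this sample
            used += 1
            num = p * f1(x)
            p = num / (num + (1.0 - p) * f0(x))
        if p >= upper:
            return k, used
    return None, used
```

Before the change, pre-change samples repeatedly push the posterior back below the lower threshold, so many observations are skipped; this is the source of the observation-cost savings over fractional sampling.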
Adaptive Sequential Optimization with Applications to Machine Learning
A framework is introduced for solving a sequence of slowly changing
optimization problems, including those arising in regression and classification
applications, using optimization algorithms such as stochastic gradient descent
(SGD). The optimization problems change slowly in the sense that the minimizers
change at either a fixed or bounded rate. A method based on estimates of the
change in the minimizers and properties of the optimization algorithm is
introduced for adaptively selecting the number of samples needed from the
distributions underlying each problem in order to ensure that the excess risk,
i.e., the expected gap between the loss achieved by the approximate minimizer
produced by the optimization algorithm and the exact minimizer, does not exceed
a target level. Experiments with synthetic and real data are used to confirm
that this approach performs well.
Comment: submitted to ICASSP 2016, extended version
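As a toy illustration of adapting the per-problem sample budget, the sketch below tracks a sequence of slowly drifting one-dimensional quadratic minimizers with SGD and grows or shrinks the budget from a crude estimate of how far the minimizer has moved. The quadratic losses, the budget-update rule, and all names are assumptions for illustration, not the paper's method:

```python
import random

def sgd(theta0, grad_sample, n_samples, lr):
    """Plain SGD: take `n_samples` noisy gradient steps from `theta0`."""
    theta = theta0
    for _ in range(n_samples):
        theta -= lr * grad_sample(theta)
    return theta

def adaptive_tracking(minimizers, target_gap, lr=0.1, n0=20, noise=0.1):
    """Track drifting quadratic minimizers, adapting the sample budget so
    that a crude excess-risk estimate stays below `target_gap`."""
    random.seed(1)
    theta, n = 0.0, n0
    estimates = []
    for m in minimizers:
        # noisy gradient of the quadratic loss 0.5 * (theta - m)^2
        grad = lambda th: (th - m) + random.gauss(0, noise)
        theta_new = sgd(theta, grad, n, lr)
        drift = abs(theta_new - theta)   # crude estimate of minimizer movement
        # Excess risk of a quadratic is 0.5 * gap^2; grow the budget if the
        # drift estimate suggests the target may be violated, else shrink it.
        n = min(4 * n0, n + n0) if drift ** 2 / 2 > target_gap else max(n0, n - 5)
        theta = theta_new
        estimates.append(theta)
    return estimates
```

The budget update here is deliberately simplistic; the paper derives the required sample counts from properties of the optimization algorithm and the drift rate of the minimizers.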